Eigendecomposition of a matrix

In the mathematical discipline of linear algebra, eigendecomposition or sometimes spectral decomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way.


Fundamental theory of matrix eigenvectors and eigenvalues

A (non-zero) vector v of dimension N is an eigenvector of a square (N×N) matrix A if and only if it satisfies the linear equation

 \mathbf{A} \mathbf{v} = \lambda \mathbf{v}

where λ is a scalar, termed the eigenvalue corresponding to v. That is, the eigenvectors are the vectors which the linear transformation A merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.

We can rewrite the eigenvalue equation as (A − λI)v = 0, which has a non-zero solution v if and only if the matrix A − λI is singular. This yields an equation for the eigenvalues

 p\left(\lambda\right) := \det\left(\mathbf{A} - \lambda \mathbf{I}\right) = 0.

We call p(λ) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown λ. This equation will have Nλ distinct solutions, where 1 ≤ Nλ ≤ N. The set of solutions, i.e. the eigenvalues, is sometimes called the spectrum of A.

We can factor p as

p\left(\lambda\right)= (\lambda-\lambda_1)^{n_1}(\lambda-\lambda_2)^{n_2}\cdots(\lambda-\lambda_k)^{n_k} = 0.

The integer ni is termed the algebraic multiplicity of eigenvalue λi. The algebraic multiplicities sum to N:

\sum\limits_{i=1}^{N_{\lambda}}{n_i} =N.

For each eigenvalue, λi, we have a specific eigenvalue equation

 \left(\mathbf{A} - \lambda_i \mathbf{I}\right)\mathbf{v}  = 0.

There will be 1 ≤ mi ≤ ni linearly independent solutions to each eigenvalue equation. The mi solutions are the eigenvectors associated with the eigenvalue λi. The integer mi is termed the geometric multiplicity of λi. It is important to keep in mind that the algebraic multiplicity ni and geometric multiplicity mi may or may not be equal, but we always have mi ≤ ni. The simplest case is of course when mi = ni = 1. The total number of linearly independent eigenvectors, Nv, can be calculated by summing the geometric multiplicities

\sum\limits_{i=1}^{N_{\lambda}}{m_i} =N_{\mathbf{v}}.

The eigenvectors can be indexed by eigenvalues, i.e. using a double index, with vi,j being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index vk, with k = 1, 2, ... , Nv.
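As a concrete illustration of these definitions, the following short NumPy sketch (the 3×3 matrix is an arbitrary choice, not one discussed in the text) recovers the eigenvalues both as roots of the characteristic polynomial and from a standard eigensolver, and checks the eigenvalue equation for each eigenvector.

import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 3.0]])

# Monic characteristic polynomial det(lambda I - A); it has the same roots as p(lambda).
char_poly = np.poly(A)
print(np.roots(char_poly))            # eigenvalues 2 and 3, the latter with algebraic multiplicity 2

# Eigenvalues and eigenvectors from the eigensolver; the columns of V are the eigenvectors.
eigvals, V = np.linalg.eig(A)
for lam, v in zip(eigvals, V.T):
    assert np.allclose(A @ v, lam * v)    # the eigenvalue equation A v = lambda v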

Eigendecomposition of a matrix

Let A be a square (N×N) matrix with N linearly independent eigenvectors, q_i \,\, (i = 1, \dots, N). Then A can be factorized as

\mathbf{A}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}

where Q is the square (N×N) matrix whose ith column is the eigenvector q_i of A and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e., \Lambda_{ii}=\lambda_i. Note that only diagonalizable matrices can be factorized in this way. For example, the matrix \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} cannot be diagonalized.

The eigenvectors q_i \,\, (i = 1, \dots, N) are usually normalized, but they need not be. A non-normalized set of eigenvectors, v_i \,\, (i = 1, \dots, N), can also be used as the columns of Q. That this is true can be understood by noting that the magnitude of the eigenvectors in Q gets canceled in the decomposition by the presence of Q−1.
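As an illustration, the following minimal NumPy sketch (with an arbitrarily chosen diagonalizable matrix) verifies the factorization A = QΛQ−1 numerically, including the fact that rescaling the columns of Q leaves the product unchanged.

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, Q = np.linalg.eig(A)     # columns of Q are the eigenvectors q_i
Lam = np.diag(eigvals)            # Lambda with lambda_i on the diagonal

# Reconstruct A from its eigendecomposition.
assert np.allclose(A, Q @ Lam @ np.linalg.inv(Q))

# Rescaling the eigenvectors (non-normalized columns) leaves the product unchanged,
# because the scaling cancels against Q^{-1}.
Q_scaled = Q * np.array([2.0, -5.0])          # scale each column
assert np.allclose(A, Q_scaled @ Lam @ np.linalg.inv(Q_scaled))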

Matrix inverse via eigendecomposition

If matrix A can be eigendecomposed and if none of its eigenvalues are zero, then A is nonsingular and its inverse is given by

\mathbf{A}^{-1}=\mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}

Because Λ is a diagonal matrix, its inverse is easy to calculate:

\left[\Lambda^{-1}\right]_{ii}=\frac{1}{\lambda_i}
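A minimal NumPy sketch of this formula, again with an arbitrary nonsingular test matrix, is:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, Q = np.linalg.eig(A)

assert np.all(eigvals != 0)                       # nonsingular: no zero eigenvalue
Lam_inv = np.diag(1.0 / eigvals)                  # inverting a diagonal matrix entrywise
A_inv = Q @ Lam_inv @ np.linalg.inv(Q)

assert np.allclose(A_inv, np.linalg.inv(A))       # agrees with the direct inverse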

Practical implications[1]

When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because, as eigenvalues become relatively small, their contribution to the inversion (the reciprocal 1/λi) becomes large. Eigenvalues near zero, or at the "noise" level of the measurement system, will have undue influence and could hamper solutions (detection) using the inverse.

Two mitigations have been proposed: 1) truncating small/zero eigenvalues, 2) extending the lowest reliable eigenvalue to those below it.

The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution.

The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found.

The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems).

If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues[2]:

\min|\nabla^2 \lambda_s |

where the eigenvalues are subscripted with an 's' to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
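A rough, illustrative sketch of this heuristic (not the published algorithm of [2]; the synthetic spectrum and the interpretation of the minimum are assumptions chosen so that the small eigenvalues form a flat noise floor) might look as follows:

import numpy as np

eigvals = np.array([9.0, 4.0, 1.5, 0.012, 0.010, 0.009, 0.011])
lam_s = np.sort(eigvals)[::-1]                 # rank-sorted, largest first

laplacian = np.diff(lam_s, n=2)                # discrete second difference of the sorted values
idx = np.argmin(np.abs(laplacian)) + 1         # +1: a second difference is centered one entry in
reliable = lam_s[idx]

print(reliable)                                # one of the small, nearly constant "noise" eigenvalues (~0.01)
print(np.sqrt(reliable))                       # rough estimate of the average noise level, per the text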

Functional calculus

The eigendecomposition allows for much easier computation of power series of matrices. If f(x) is given by

f(x)=a_0+a_1 x+a_2 x^2+\cdots

then we know that

f\left(\mathbf{A}\right)=\mathbf{Q}f\left(\mathbf{\Lambda}\right)\mathbf{Q}^{-1}

Because Λ is a diagonal matrix, functions of Λ are very easy to calculate:

\left[f\left(\mathbf{\Lambda}\right)\right]_{ii}=f\left(\lambda_i\right)

The off-diagonal elements of f(Λ) are zero; that is, f(Λ) is also a diagonal matrix. Therefore, calculating f(A) reduces to just calculating the function on each of the eigenvalues.
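For example, the following NumPy sketch (the matrix and the choice f = exp are arbitrary illustrations) computes f(A) = Q f(Λ) Q−1 and compares it with a truncated power series:

import math
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, Q = np.linalg.eig(A)

# Apply f entrywise to the eigenvalues: f(Lambda) is diagonal with entries f(lambda_i).
expA = Q @ np.diag(np.exp(eigvals)) @ np.linalg.inv(Q)

# Compare with the power series exp(A) = I + A + A^2/2! + ...
series = sum(np.linalg.matrix_power(A, k) / math.factorial(k) for k in range(30))
assert np.allclose(expA, series)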

A similar technique works more generally with the holomorphic functional calculus, using

\mathbf{A}^{-1}=\mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}

from above. Once again, we find that

\left[f\left(\mathbf{\Lambda}\right)\right]_{ii}=f\left(\lambda_i\right)

Examples

\mathbf{A}^{2}=(\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1})(\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}) = \mathbf{Q}\mathbf{\Lambda}(\mathbf{Q}^{-1}\mathbf{Q})\mathbf{\Lambda}\mathbf{Q}^{-1}=\mathbf{Q}\mathbf{\Lambda}^{2}\mathbf{Q}^{-1}
\mathbf{A}^{n}=\mathbf{Q}\mathbf{\Lambda}^{n}\mathbf{Q}^{-1}
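The identity A^n = QΛ^nQ−1 is easy to check numerically; in this sketch the matrix and the exponent are arbitrary choices:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, Q = np.linalg.eig(A)
n = 5

An = Q @ np.diag(eigvals**n) @ np.linalg.inv(Q)            # Q Lambda^n Q^{-1}
assert np.allclose(An, np.linalg.matrix_power(A, n))       # agrees with repeated multiplication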

Decomposition for special matrices

Symmetric matrices

Every N×N real symmetric matrix has N linearly independent real eigenvectors. Moreover, these eigenvectors can be chosen such that they are orthogonal to each other and have norm one. Thus a real symmetric matrix A can be decomposed as

\mathbf{A}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{T}

where Q is an orthogonal matrix (its columns are the orthonormal eigenvectors of A), and Λ is a diagonal matrix whose entries are exactly the eigenvalues of A.
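For example, NumPy's eigh routine (a solver specialized to symmetric/Hermitian matrices) returns an orthogonal Q for a real symmetric matrix; the matrix below is an illustrative choice.

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])          # real symmetric

eigvals, Q = np.linalg.eigh(A)
assert np.allclose(Q @ Q.T, np.eye(3))                  # Q is orthogonal (Q^{-1} = Q^T)
assert np.allclose(A, Q @ np.diag(eigvals) @ Q.T)       # A = Q Lambda Q^T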

Normal matrices

Similarly, a complex normal matrix has an orthonormal eigenvector basis, so a normal matrix can be decomposed as

\mathbf{A}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{*}

where U is a unitary matrix. Further, if A is Hermitian (which implies that it is also complex normal), the diagonal matrix Λ has only real values, and if A is unitary, Λ takes all its values on the complex unit circle.
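A short complex Hermitian example (the matrix is chosen only for illustration) shows the unitary factor U and the real eigenvalues:

import numpy as np

A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])        # Hermitian: A equals its conjugate transpose

eigvals, U = np.linalg.eigh(A)
assert np.allclose(A, A.conj().T)                         # sanity check: Hermitian
assert np.allclose(U @ U.conj().T, np.eye(2))             # U is unitary
assert np.allclose(A, U @ np.diag(eigvals) @ U.conj().T)  # A = U Lambda U^*
print(eigvals)                                            # real eigenvalues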

Useful facts

Useful facts regarding eigenvalues

\det\left(\mathbf{A}\right) = \prod\limits_{i=1}^{N_{\lambda}}{\lambda_i^{n_i}}

Note that each eigenvalue is raised to the power ni, the algebraic multiplicity.

\operatorname{tr}\left(\mathbf{A}\right) = \sum\limits_{i=1}^{N_{\lambda}}{{n_i}\lambda_i}

Note that each eigenvalue is multiplied by ni, the algebraic multiplicity.
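Both identities are easy to check numerically; the matrix below is an arbitrary example with a repeated eigenvalue.

import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 3.0]])          # eigenvalues 2, 3, 3 (lambda = 3 counted twice)

eigvals = np.linalg.eigvals(A)
assert np.isclose(np.linalg.det(A), np.prod(eigvals))   # det = product of eigenvalues
assert np.isclose(np.trace(A), np.sum(eigvals))         # trace = sum of eigenvalues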

Useful facts regarding eigenvectors

Useful facts regarding eigendecomposition

A can be eigendecomposed if and only if the number of linearly independent eigenvectors equals the dimension of the matrix, i.e.

N_{\mathbf{v}}=N

Useful facts regarding matrix inverse

A can be inverted via its eigendecomposition if and only if all of its eigenvalues are nonzero,

\lambda_i \ne 0 \; \forall \,i

in which case the inverse is given by

\mathbf{A}^{-1}=\mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}

Numerical computations

Numerical computation of eigenvalues

Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.

In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 and above) polynomials cannot in general be expressed simply using nth roots. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative.

Iterative numerical algorithms for approximating roots of polynomials exist, such as Newton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.[3]

A simple and accurate iterative method is the power method: a random vector v is chosen and a sequence of unit vectors is computed as

\frac{Av}{\|Av\|}, \frac{A^2v}{\|A^2v\|}, \frac{A^3v}{\|A^3v\|}, \dots

This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that v has a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the PageRank of documents in their search engine.[4] Also, the power method is the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at the span of all the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis of Arnoldi iteration.[3] Alternatively, the important QR algorithm is also based on a subtle transformation of the power method.[3]
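A bare-bones implementation of the power method might look as follows; the starting vector, the iteration count, and the test matrix are illustrative choices, and the eigenvalue is recovered with the Rayleigh quotient.

import numpy as np

def power_iteration(A, num_iters=500, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])        # random start: almost surely has a nonzero
    v /= np.linalg.norm(v)                     # component along the dominant eigenvector
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)                 # keep the iterate a unit vector
    lam = v @ A @ v                            # Rayleigh quotient estimates the eigenvalue
    return lam, v

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam)                                     # close to the eigenvalue of largest magnitude
assert np.allclose(A @ v, lam * v, atol=1e-8)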

Numerical computation of eigenvectors

Once the eigenvalues are computed, the eigenvectors can be calculated by solving the equation

 \left(\mathbf{A} - \lambda_i \mathbf{I}\right)\mathbf{v}_{i,j}  = 0

using Gaussian elimination or any other method for solving matrix equations.
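One way to carry this out numerically (an illustrative sketch, not what production eigensolvers do) is to take the null space of A − λiI from its singular value decomposition:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = 5.0                                      # a known eigenvalue of this matrix

_, s, Vt = np.linalg.svd(A - lam * np.eye(2))
v = Vt[-1]                                     # right singular vector for the smallest singular value
assert np.isclose(s[-1], 0.0, atol=1e-12)      # (A - lam I) is singular
assert np.allclose(A @ v, lam * v)             # v is an eigenvector for lam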

However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. In power iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by the Rayleigh quotient of the eigenvector).[3] In the QR algorithm for a Hermitian matrix (or any normal matrix), the orthonormal eigenvectors are obtained as a product of the Q matrices from the steps in the algorithm.[3] (For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a backsubstitution procedure.[5]) For Hermitian matrices, the Divide-and-conquer eigenvalue algorithm is more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.[3]

Additional topics

Generalized eigenspaces

Recall that the geometric multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of λI − A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix (λI − A)k for any sufficiently large k. That is, it is the space of generalized eigenvectors (1st sense), where a generalized eigenvector is any vector which eventually becomes 0 if λI − A is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity.

This usage should not be confused with the generalized eigenvalue problem described below.
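A concrete instance uses the non-diagonalizable matrix from the section "Eigendecomposition of a matrix" above: for that matrix and λ = 1, the vector e2 = (0, 1) is not an eigenvector, but (A − I)² e2 = 0, so it is a generalized eigenvector.

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - 1.0 * np.eye(2)                  # A - lambda I with lambda = 1
e2 = np.array([0.0, 1.0])

print(N @ e2)                            # [1, 0]: nonzero, so e2 is not an ordinary eigenvector
print(N @ N @ e2)                        # [0, 0]: applying A - I twice sends e2 to zero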

Conjugate eigenvector

A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is

Av = \lambda v^*.

For example, in coherent electromagnetic scattering theory, the linear transformation A represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.

Generalized eigenvalue problem

A generalized eigenvalue problem (2nd sense) is the problem of finding a vector v that obeys

 A\mathbf{v} = \lambda B \mathbf{v}

where A and B are matrices. If v obeys this equation, with some λ, then we call v the generalized eigenvector of A and B (in the 2nd sense), and λ is called the generalized eigenvalue of A and B (in the 2nd sense) which corresponds to the generalized eigenvector v. The possible values of λ must obey the following equation

\det(A - \lambda B)=0.

If we can find n\in\mathbb{N} linearly independent vectors \{\mathbf{v_1},\dots,\mathbf{v_n}\} such that for every i\in\{1,\dots,n\}, A\mathbf{v_i} = \lambda_i B \mathbf{v_i} with \lambda_i\in\mathbb{F}, then we define the matrices P and D such that

P=
\begin{pmatrix}
  | & & | \\
  \mathbf{v_1} & \cdots & \mathbf{v_n}   \\
  | &  & | \\ 
\end{pmatrix}
=
\begin{pmatrix}
  (\mathbf{v_1})_1 & \cdots & (\mathbf{v_n})_1 \\
  \vdots &  & \vdots   \\
  (\mathbf{v_1})_n & \cdots & (\mathbf{v_n})_n \\ 
\end{pmatrix}


(D)_{ij} = 
\begin{cases} 
  \lambda_i,  & \text{if }i = j\\
  0, & \text{else}
\end{cases}

Then the following equality holds


\mathbf{A}=\mathbf{B}\mathbf{P}\mathbf{D}\mathbf{P}^{-1}

The proof is as follows:


\mathbf{A}\mathbf{P}=
\mathbf{A}
\begin{pmatrix}
  | & & | \\
  \mathbf{v_1} & \cdots & \mathbf{v_n}   \\
  | &  & | \\ 
\end{pmatrix}
=
\begin{pmatrix}
  | & & | \\
  A\mathbf{v_1} & \cdots & A\mathbf{v_n}   \\
  | &  & | \\ 
\end{pmatrix}
=


\begin{pmatrix}
  | & & | \\
  \lambda_1B\mathbf{v_1} & \cdots & \lambda_nB\mathbf{v_n}   \\
  | &  & | \\ 
\end{pmatrix}
=

\begin{pmatrix}
  | & & | \\
  B\mathbf{v_1} & \cdots & B\mathbf{v_n}   \\
  | &  & | \\ 
\end{pmatrix}
\mathbf{D}
=
\mathbf{B}\mathbf{P}\mathbf{D}

Since P is invertible, multiplying the equation on the right by P^{-1} gives A = BPDP^{-1}, which completes the proof.

The set of matrices of the form A - \lambda B, where  \lambda is a complex number, is called a pencil; the term matrix pencil can also refer to the pair (A,B) of matrices.[6] If B is invertible, then the original problem can be written in the form

 B^{-1}Av = \lambda v

which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important if A and B are Hermitian matrices, since in this case B^{-1}A is not generally Hermitian and important properties of the solution are no longer apparent.

If A and B are Hermitian and B is a positive-definite matrix, the eigenvalues λ are real and eigenvectors v1 and v2 with distinct eigenvalues are B-orthogonal (v_1^* B v_2 = 0). Also, in this case it is guaranteed that there exists a basis of generalized eigenvectors (it is not a defective problem).[6] This case is sometimes called a Hermitian definite pencil or definite pencil.[6]
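For a Hermitian definite pencil, SciPy's generalized solver scipy.linalg.eigh(A, B) can be used; the following sketch (with arbitrarily chosen small matrices) checks the defining equation and the B-orthogonality of the eigenvectors.

import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # symmetric
B = np.array([[2.0, 0.5],
              [0.5, 1.0]])               # symmetric positive definite

eigvals, V = eigh(A, B)                  # generalized eigenvalues (real) and eigenvectors
for lam, v in zip(eigvals, V.T):
    assert np.allclose(A @ v, lam * B @ v)   # A v = lambda B v

# Eigenvectors with distinct eigenvalues are B-orthogonal: v1^T B v2 = 0.
v1, v2 = V[:, 0], V[:, 1]
assert np.isclose(v1 @ B @ v2, 0.0)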


References

  1. Hayde, A. F.; Twede, D. R. "Observations on relationship between eigenvalues, instrument noise and detection performance". http://adsabs.harvard.edu/abs/2002SPIE.4816..355H
  2. Twede, D. R.; Hayde, A. F. "Refinement and generalization of the extension method of covariance matrix inversion by regularization". http://adsabs.harvard.edu/abs/2004SPIE.5159..299T
  3. Lloyd N. Trefethen and David Bau, Numerical Linear Algebra (SIAM, 1997).
  4. Ipsen, Ilse, and Rebecca M. Wills, Analysis and Computation of Google's PageRank, 7th IMACS International Symposium on Iterative Methods in Scientific Computing, Fields Institute, Toronto, Canada, 5–8 May 2005.
  5. Alfio Quarteroni, Riccardo Sacco, Fausto Saleri, Numerical Mathematics (Springer, 2000), section 5.8.2.
  6. Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. Van Der Vorst, eds. (2000). "Generalized Hermitian Eigenvalue Problems". Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide. Philadelphia: SIAM. ISBN 0898714710. http://www.cs.utk.edu/~dongarra/etemplates/node156.html
